Extracting full-field subpixel structural displacements from videos via deep learning
Authors
Abstract
Conventional displacement sensing techniques (e.g., laser, linear variable differential transformer) have been widely used in structural health monitoring over the past two decades. Though these techniques are capable of measuring displacement time histories with high accuracy, distinct shortcomings remain, such as point-to-point contact sensing, which limits their applicability to real-world problems. Video cameras have gained attention in recent years due to advantages that include low price, agility, high spatial resolution, and non-contact sensing. Compared with target tracking approaches (e.g., digital image correlation, template matching, etc.), the phase-based method is powerful for detecting small subpixel motions without the use of paints or markers on the structure surface. Nevertheless, its complex computational procedure limits real-time inference capacity. To address this fundamental issue, we develop a deep learning framework based on convolutional neural networks (CNNs) to enable the extraction of full-field subpixel displacements from videos. In particular, new CNN architectures are designed and trained on a dataset generated by applying the phase-based motion extraction method to a single lab-recorded high-speed video of a dynamic structure. As motion estimates are only reliable in regions with sufficient texture contrast, the sparsity of the motion field induced by the texture mask is considered via both the network architecture design and the loss function definition. Results show that, with supervision by the full and sparse motion fields, the trained networks are capable of identifying pixels with sufficient texture contrast as well as their motions. The performance is tested on various videos of other structures (e.g., to extract displacement time histories), which indicates the networks' generalizability in accurately extracting subpixel motions in regions of sufficient texture contrast.
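The idea of restricting supervision to textured regions, as described above, can be sketched as a masked loss: only pixels flagged by a texture-contrast mask contribute to the training error. This is a minimal illustrative sketch in plain Python, not the authors' actual implementation; the function name and data layout are assumptions.

```python
# Hypothetical sketch of a texture-masked loss: supervision is restricted to
# pixels with sufficient texture contrast, so untextured pixels (mask == 0)
# contribute nothing to the error. Names and layout are illustrative only.

def masked_mse(pred, target, mask):
    """Mean squared error over pixels where mask == 1.

    pred, target: 2D lists of per-pixel motion values (e.g., horizontal flow).
    mask: 2D list of 0/1 flags marking high texture-contrast pixels.
    """
    total, count = 0.0, 0
    for pred_row, target_row, mask_row in zip(pred, target, mask):
        for p, t, m in zip(pred_row, target_row, mask_row):
            if m:  # only textured pixels supervise the network
                total += (p - t) ** 2
                count += 1
    return total / count if count else 0.0

pred   = [[0.10, 0.00], [0.25, 0.05]]
target = [[0.12, 0.90], [0.20, 0.00]]
mask   = [[1, 0], [1, 0]]  # second column lacks texture, so it is ignored
print(masked_mse(pred, target, mask))
```

In a full training pipeline this same masking would typically be expressed as an element-wise multiplication of the error map by the mask before reduction, so the gradient is zero at untextured pixels.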
Similar Resources
Extracting Visual Patterns from Deep Learning Representations
Vector-space word representations based on neural network models can include linguistic regularities, enabling semantic operations based on vector arithmetic. In this paper, we explore an analogous approach applied to images. We define a methodology to obtain large and sparse vectors from individual images and image classes, by using a pre-trained model of the GoogLeNet architecture. We evaluat...
Unsupervised learning from videos using temporal coherency deep networks
In this work we address the challenging problem of unsupervised learning from videos. Existing methods utilize the spatio-temporal continuity in contiguous video frames as regularization for the learning process. Typically, this temporal coherence of close frames is used as a free form of annotation, encouraging the learned representations to exhibit small differences between these frames. But ...
Extracting Moving People from Internet Videos
We propose a fully automatic framework to detect and extract arbitrary human motion volumes from real-world videos collected from YouTube. Our system is composed of two stages. A person detector is first applied to provide crude information about the possible locations of humans. Then a constrained clustering algorithm groups the detections and rejects false positives based on the appearance si...
Extracting Action Sequences from Texts Based on Deep Reinforcement Learning
Extracting action sequences from texts in natural language is challenging, which requires commonsense inferences based on world knowledge. Although there has been work on extracting action scripts, instructions, navigation actions, etc., they require either the set of candidate actions is provided in advance, or action descriptions are restricted in a specific form, e.g., description templates....
Deep Learning for Extracting Water Body from Landsat Imagery
There are regional limitations in traditional methods of water body extraction. For different terrain, all the methods rely heavily on carefully hand-engineered feature selection and large amounts of prior knowledge. Due to the difficulty and high cost in acquiring, the labeled data of remote sensing is relatively small. Thus, there exist some challenges in the classification of huge amount of ...
Journal
Journal title: Journal of Sound and Vibration
Year: 2021
ISSN: 1095-8568, 0022-460X
DOI: https://doi.org/10.1016/j.jsv.2021.116142